418 research outputs found
A Repetition Test for Pseudo-Random Number Generators
A new statistical test for uniform pseudo-random number generators (PRNGs) is presented. The idea is that a sequence of pseudo-random numbers should have numbers reappear with a certain probability. The expected time until a repetition occurs provides the metric for the test. For linear congruential generators (LCGs), failure can be shown theoretically. Empirical test results for a number of commonly used PRNGs are reported, showing that some PRNGs considered to have good statistical properties fail. A sample implementation of the test is provided over the Internet.
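The abstract does not spell out the test statistic; a minimal sketch of the underlying idea, assuming a birthday-problem-style waiting time until the first repeated value (function names and parameters are illustrative, not the paper's), could look like:

```python
import math
import random

def draws_until_repeat(rng, m):
    """Count draws from a generator over {0, ..., m-1} until a value repeats."""
    seen = set()
    draws = 0
    while True:
        x = rng.randrange(m)
        draws += 1
        if x in seen:
            return draws
        seen.add(x)

def repetition_test(rng_factory, m=2**16, trials=200):
    """Compare the mean waiting time for a repetition against the
    birthday-problem expectation sqrt(pi * m / 2)."""
    expected = math.sqrt(math.pi * m / 2)
    mean = sum(draws_until_repeat(rng_factory(seed), m)
               for seed in range(trials)) / trials
    return mean, expected
```

A good generator should produce a mean close to the expectation, while a full-period LCG reduced to m values repeats with the wrong timing, which is the theoretical failure mode mentioned above.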
A model for capturing and tracing architectural designs
Software architecture constitutes the primary design of a software system. Consequently, the architectural design decisions involved in architecture design have a key impact on the system in aspects such as future maintenance costs, resulting quality, and timeliness. However, the knowledge applied and the design decisions taken by software architects are not explicitly represented in the design despite their important role; consequently, they remain in the minds of designers and are lost over time. In this work, a model for capturing and tracing the products and architectural design decisions involved in software architecture design processes is proposed. An operational perspective is taken in which design decisions can be modelled by means of design operations. The basic ontology of situation calculus is adopted to formally model the evolution of a software architecture.
1st International Workshop on Advanced Software Engineering: Expanding the Frontiers of Software Technology - Session 1: Software Architecture. Red de Universidades con Carreras en Informática (RedUNCI)
A model for documenting architectural decisions
A software architecture is the result of architectural design decisions.
Documenting a software architecture should not only describe the final model, but also explain why the architecture looks as it does. During the software architecture design process, several decisions are made, which need to be captured and documented in a systematic way to prevent knowledge vaporization and high costs of architectural change. In this work, a model for capturing, documenting, and recovering architectural design processes and their underlying design rationale is proposed. The design process is viewed from an operational perspective, in which design decisions are represented as sequences of operations. Moreover, the model is extensible to manage several types of design products from different domains and views, including aspects of architectural rationale. Additionally, the proposal provides a semi-automatic mechanism for generating architectural rationale documents based on templates.
Sociedad Argentina de Informática e Investigación Operativa (SADIO)
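The operational perspective is not detailed in the abstract; a minimal sketch, assuming an architecture state that evolves by applying recorded design operations one at a time (in the spirit of situation calculus's do(a, s); all names here are invented for illustration), might look like:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operation:
    """A design decision expressed as a named operation with arguments."""
    name: str
    args: tuple

def do(op, arch):
    """Apply one design operation to an architecture state (components, connectors)."""
    components, connectors = arch
    if op.name == "add_component":
        return (components | {op.args[0]}, connectors)
    if op.name == "connect":
        return (components, connectors | {op.args})
    raise ValueError(f"unknown operation: {op.name}")

def replay(operations, arch=(frozenset(), frozenset())):
    """Recover the final architecture by replaying the recorded
    sequence of operations, preserving the decision trail."""
    for op in operations:
        arch = do(op, arch)
    return arch
```

Because the sequence of operations is kept rather than only the final model, the rationale trail can be replayed or inspected at any intermediate step.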
Electromagnetic plasma modeling in circuit breaker within the finite volume method
In order to ensure the galvanic isolation of an electrical system following a manual operation or a fault strike, the current-limitation properties of the electric arc are used, forcing a fast decrease of the current to zero. Modeling this process proves complex, since it involves a large number of physical phenomena (radiation, phase transitions, electromagnetism, fluid dynamics, plasma physics). To obtain a robust solution supporting strongly coupled resolution and compatible time constants, the Finite Volume Method was chosen. This method was first implemented on intrinsic electromagnetism problems (current flow, magnetostatics including non-linear materials, and magnetodynamics). Once validated, the models were successfully integrated into Schneider's dedicated current-interruption software, thus allowing a significantly improved simulation of Schneider Electric circuit breakers.
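As an illustration of the Finite Volume Method on the simplest of the electromagnetic problems listed (steady current flow), a hedged 1-D sketch, with a made-up per-cell conductivity field and Dirichlet boundary potentials (not the paper's formulation), could be:

```python
import numpy as np

def fv_potential(sigma, V=1.0, L=1.0):
    """Finite-volume solve of -d/dx(sigma dphi/dx) = 0 on [0, L]
    with phi(0) = V and phi(L) = 0; sigma holds n cell conductivities."""
    n = len(sigma)
    dx = L / n
    # Face conductances: harmonic mean on interior faces,
    # half-cell distance at the two boundary faces.
    G = np.empty(n + 1)
    G[1:-1] = 2 * sigma[:-1] * sigma[1:] / (sigma[:-1] + sigma[1:]) / dx
    G[0] = 2 * sigma[0] / dx
    G[-1] = 2 * sigma[-1] / dx
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = G[i] + G[i + 1]   # flux balance on cell i
        if i > 0:
            A[i, i - 1] = -G[i]
        else:
            b[i] += G[0] * V        # left Dirichlet boundary
        if i < n - 1:
            A[i, i + 1] = -G[i + 1]
        # right boundary: phi = 0 contributes nothing to b
    return np.linalg.solve(A, b)
```

The harmonic-mean face conductance keeps the flux continuous across material jumps, which is one reason finite volumes suit strongly heterogeneous plasma/conductor domains.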
SWIFT: Using task-based parallelism, fully asynchronous communication, and graph partition-based domain decomposition for strong scaling on more than 100,000 cores
We present a new open-source cosmological code, called SWIFT, designed to solve the equations of hydrodynamics using a particle-based approach (Smoothed Particle Hydrodynamics) on hybrid shared/distributed-memory architectures. SWIFT was designed from the bottom up to provide excellent strong scaling on both commodity clusters (Tier-2 systems) and Top100 supercomputers (Tier-0 systems), without relying on architecture-specific features or specialized accelerator hardware. This performance is due to three main computational approaches:
• Task-based parallelism for shared-memory parallelism, which provides fine-grained load balancing and thus strong scaling on large numbers of cores.
• Graph-based domain decomposition, which uses the task graph to decompose the simulation domain such that the work, as opposed to just the data (as is the case with most partitioning schemes), is equally distributed across all nodes.
• Fully dynamic and asynchronous communication, in which communication is modelled as just another task in the task-based scheme, sending data whenever it is ready and deferring tasks that rely on data from other nodes until it arrives.
In order to use these approaches, the code had to be rewritten from scratch, and the algorithms therein adapted to the task-based paradigm. As a result, we can show upwards of 60% parallel efficiency for moderate-sized problems when increasing the number of cores 512-fold, on both x86-based and Power8-based architectures.
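SWIFT's actual scheduler is far more sophisticated; a toy sketch of the first ingredient, dependency-driven task scheduling on a shared-memory thread pool (names and structure are illustrative, not SWIFT's API), might look like:

```python
import threading
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def run_task_graph(tasks, workers=4):
    """tasks: {name: (fn, [dependency names])}. Each task runs as soon as
    all of its dependencies have completed, wherever a worker is free,
    which is what gives fine-grained load balancing."""
    if not tasks:
        return
    indegree = {name: len(deps) for name, (fn, deps) in tasks.items()}
    dependents = defaultdict(list)
    for name, (fn, deps) in tasks.items():
        for dep in deps:
            dependents[dep].append(name)
    lock = threading.Lock()
    all_done = threading.Event()
    pending = [len(tasks)]
    pool = ThreadPoolExecutor(workers)

    def run(name):
        tasks[name][0]()                  # execute the task body
        with lock:
            pending[0] -= 1
            if pending[0] == 0:
                all_done.set()
            ready = []
            for child in dependents[name]:
                indegree[child] -= 1
                if indegree[child] == 0:  # last dependency just finished
                    ready.append(child)
        for child in ready:
            pool.submit(run, child)       # unlock children immediately

    for name, deg in list(indegree.items()):
        if deg == 0:
            pool.submit(run, name)
    all_done.wait()
    pool.shutdown()
```

In this picture, a send or receive would simply be another entry in `tasks`, so communication overlaps with computation for free.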
Shaping Biological Knowledge: Applications in Proteomics
The central dogma of molecular biology has provided a meaningful principle for data integration in the field of genomics. In this context, integration reflects the known transitions from a chromosome to a protein sequence: transcription, intron splicing, exon assembly, and translation. There is no such clear principle for integrating proteomics data, since the laws governing protein folding and interactivity are not fully understood. In our effort to bring together independent pieces of information relative to proteins in a biologically meaningful way, we assess the bias of bioinformatics resources and the consequent approximations in the framework of small-scale studies. We analyse proteomics data following both a data-driven approach (focusing on proteins smaller than 10 kDa) and a hypothesis-driven one (focusing on whole bacterial proteomes). These applications are potentially the source of specialized complements to classical biological ontologies.
Traceability of Scrum Processes
In agile methodologies, traceability is considered a fundamental aspect to study in order to develop quality systems. However, agile processes take place in environments where a requirements specification document is rarely found, making it impossible to apply classical traceability techniques. Consequently, this work proposes a traceability model based on Scrum practices. The main goal of the model is to represent the traces that exist between the artifacts generated during Scrum processes. The proposal is specialized and exemplified following the documentation of the Moodle development process.
Sociedad Argentina de Informática e Investigación Operativa (SADIO)
Traceability of agile processes: a model for the traceability of Scrum processes
In agile methodologies, traceability is considered an essential aspect to incorporate for the production of quality software. However, agile development processes, as opposed to "heavyweight" development processes, do not allow the direct application of traditional traceability techniques. Consequently, it is fundamental to develop models that make it possible to trace requirements under the agile approach.
This work addresses this problem focusing on the agile methodology Scrum. The proposed model is developed with the goal of supporting the following competency questions: i) Which events originated a particular artifact?; ii) Which requirements guided the generation of that artifact?; iii) Who are the participants involved in a given event? The answers to these questions would assist traceability tasks in agile projects, such as linking Stakeholders with Requirements, User Stories with Versions, and Requirements with Requirements.
WIS - X Workshop Ingeniería de Software. Red de Universidades con Carreras en Informática (RedUNCI)
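The three competency questions map naturally onto a small trace store; a hypothetical sketch (class and method names are invented for illustration, not taken from the paper's model) could be:

```python
from collections import defaultdict

class TraceModel:
    """Trace store linking Scrum events, artifacts, requirements, and participants."""
    def __init__(self):
        self.generated_by = defaultdict(set)   # artifact -> events that produced it
        self.guided_by = defaultdict(set)      # artifact -> requirements that guided it
        self.participants = defaultdict(set)   # event -> people involved

    def record(self, event, artifact, requirements, people):
        """Register one trace: an event produced an artifact, guided by
        some requirements, with some participants involved."""
        self.generated_by[artifact].add(event)
        self.guided_by[artifact] |= set(requirements)
        self.participants[event] |= set(people)

    # i) Which events originated a particular artifact?
    def origin_events(self, artifact):
        return self.generated_by[artifact]

    # ii) Which requirements guided the generation of that artifact?
    def guiding_requirements(self, artifact):
        return self.guided_by[artifact]

    # iii) Who are the participants involved in a given event?
    def who(self, event):
        return self.participants[event]
```

Stakeholder-to-requirement or story-to-version links would then be derived by joining these three mappings.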
Exadat: Variability Analysis in the Persistence of Software Products
There is a worldwide trend toward the development and evolution of product families instead of the creation of a software product for a specific client. However, it is common for such a family to be built from several custom-made systems. In addition, these implementations usually lack documentation of the implemented architecture. Given this problem, this work proposes the identification of software product families from a persistence perspective. The proposed approach uses reverse-engineering mechanisms to reconstruct the architecture of the implemented product. From that architecture, possible variation points and the variants involved in data persistence are identified.
Sociedad Argentina de Informática e Investigación Operativa (SADIO)